S3Drive
Community / support / Using Cloudflare's R2 (S3 Compatible)
secrethash
I am using Cloudflare's R2 (S3-compatible) object storage with the S3Drive mobile application for Android. After a successful login it shows this error and does not list the objects: MinioError: ListObjectsV2 search parameter maxKeys not implemented
👍 1
Tom
Hi @secrethash and thanks for reporting this issue. In fact we haven't used S3Drive with Cloudflare before. It does seem that they don't support the maxKeys parameter we're using, even though its support is advertised as implemented: https://developers.cloudflare.com/r2/api/s3/api/ We'll implement a workaround and have it deployed over the next few days, if not sooner. I will let you know once it's released. Stay tuned! (edited)
👍 1
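For reference, a minimal sketch of the failing call using boto3 (the account ID, credentials and bucket name are placeholders; this is not S3Drive's actual client code):
```python
import boto3

# Placeholders: substitute your own account ID, credentials and bucket.
s3 = boto3.client(
    "s3",
    endpoint_url="https://{YOUR_ACCOUNT_ID}.r2.cloudflarestorage.com",
    aws_access_key_id="...",
    aws_secret_access_key="...",
)

# At the time of this thread, R2 rejected the max-keys query parameter
# even though the docs advertised ListObjectsV2 support.
resp = s3.list_objects_v2(Bucket="my-bucket", MaxKeys=1000)
for obj in resp.get("Contents", []):
    print(obj["Key"])
```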
secrethash
Thank you @Tom for your help. I appreciate it.
6:02 AM
Alternatively, should I report this issue to Cloudflare too? I also checked their API docs, and they mention support for the maxKeys param.
Tom
@secrethash Definitely, there is no harm in reaching out to Cloudflare. We've addressed the maxKeys issue and are waiting for Google to approve the release on the Play Store, which should happen within the next ~12-24 hours. Unfortunately, in the meantime we've found out that there is no content-length header in the response (e.g. during file download/open). We rely on it to display transfer progress as well as to make certain decisions related to encryption. In theory we could implement a content-length workaround, but it would take us a little while, so we're going to investigate first whether it is possible to enable that header. I know that Cloudflare has some logic behind the content-length header, which in some cases is provided and in some isn't... we'll have a look at it as well, but if you're going to reach out to Cloudflare, it is something you can ask about too. Thanks (edited)
🔥 1
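For illustration, a rough sketch of the kind of progress fallback a client needs when Content-Length is missing (using Python's requests; the function name and chunk size are made up for the example):
```python
import requests

def download(url: str, path: str) -> None:
    # Stream the response; on R2 the Content-Length header may be absent.
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        total = resp.headers.get("Content-Length")
        done = 0
        with open(path, "wb") as out:
            for chunk in resp.iter_content(chunk_size=64 * 1024):
                out.write(chunk)
                done += len(chunk)
                if total:
                    print(f"progress: {done / int(total):.0%}")
                else:
                    print(f"progress: {done} bytes so far")  # indeterminate
```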
secrethash
Great @Tom, I just got the update from the Google Play Store and it works like a charm now.
5:49 PM
I will surely contact Cloudflare's support and raise the issue with them. I will update you with their response too.
secrethash
Hey @Tom, I guess the maxKeys problem could be similar to this issue from a Cloudflare community post. Could you please confirm? https://community.cloudflare.com/t/accessing-r2-from-databricks/506230
Tom
@secrethash It's related to maxKeys, but the underlying error message is different. Since the post doesn't include the full request headers/params, I can't tell exactly.
6:28 PM
@secrethash ...also, good news: we've found another workaround for the content-length issue on the mobile and desktop clients. The web client fix will have to wait a little longer (because we have no control over the content-length header in the browser). Basically, if the accept-encoding HTTP request header includes gzip, Cloudflare seems to skip content-length altogether. We're testing a couple of things right now, but if things go well we'll be able to release it promptly. Related: https://community.cloudflare.com/t/no-content-length-header-when-content-type-gzip/492964 (edited)
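To make that header behaviour concrete, a hedged sketch of requesting an uncompressed response so the Content-Length survives (assumes a presigned or public object URL; the URL below is a placeholder):
```python
import requests

# Placeholder URL, e.g. a presigned GET URL for an object in your bucket.
url = "https://{YOUR_ACCOUNT_ID}.r2.cloudflarestorage.com/my-bucket/my-key"

resp = requests.get(
    url,
    headers={"Accept-Encoding": "identity"},  # no gzip, so a fixed length exists
    stream=True,
)
# Expected to be present when gzip is not negotiated, per the discussion above.
print(resp.headers.get("Content-Length"))
```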
6:32 PM
Just to confirm, can I ask what your Cloudflare R2 endpoint looks like? Is it something like https://some_numbers_and_letters.r2.cloudflarestorage.com ?
secrethash (replying to Tom's question above)
Yes, it's basically https://{YOUR_ACCOUNT_ID}.r2.cloudflarestorage.com/ where the Account ID is a 32-digit alphanumeric ID.
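For illustration only, a tiny helper reflecting that endpoint shape (the 32-character check mirrors the description above and is an assumption, not an official validation rule):
```python
import re

def r2_endpoint(account_id: str) -> str:
    # Assumption: account IDs are 32 lowercase alphanumeric characters.
    if not re.fullmatch(r"[0-9a-z]{32}", account_id):
        raise ValueError("expected a 32-character alphanumeric account ID")
    return f"https://{account_id}.r2.cloudflarestorage.com"
```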
secrethash (replying to Tom's content-length workaround above)
That's great 🔥
6:47 PM
It looks like R2 has some S3 compatibility issues that affect S3Drive. I hope they fix these soon.
Tom
Hi @secrethash, we've released the 1.2.5 update (Android), which addresses these issues. I would appreciate it if you could let me know whether it all works for you now. 📁
secrethash
Hey @Tom, I gave it a go and it works perfectly. Some downloads were failing in the previous version, but after the update everything works. Great work! I appreciate it a lot 🔥
👍 1
Tom
That's fantastic. I really appreciate that you took the time to report this issue. If you have any other issues or feature ideas, don't hesitate to reach out to me directly. We will be releasing this update to the desktop clients over the next few days. Thanks!
GrahamC
Getting a SocketException when uploading files to Cloudflare R2. I posted a screenshot in the #general channel.
Tom
Hi @GrahamC, thanks for reporting this. Just a question: do you have in-app E2E encryption enabled? What's the approximate size of the files that you upload? If you disable E2E (assuming you have it enabled), does that resolve the issue?
GrahamC
Yes, E2E is enabled. Files are all 1.5 to 2.5 MB in size. I will try without E2E later today.
GrahamC
I tried with E2E off and it made no difference. I also tried with a faster Internet connection (5 Mbps UL) and there is no problem. It only happens on a slow connection (320 Kbps UL), and only for some of the files. With 28 files in the folder, typically 8 or 9 would work and 19 or 20 would fail; with 6 files, 4 or 5 would work and 1 or 2 would fail. Although the link speed is low, the connection is completely stable. If anything can be done to make it more tolerant of low connection speeds, that would be helpful.
Tom (replying to GrahamC's message above)
Thanks, that's helpful for us to build a reliable testing environment where we can reproduce it (and test the fix). We will be looking to address that one; it might be possible to resolve it by allowing different connection buffer sizes to be set.
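As a rough illustration of that direction (not S3Drive's implementation), boto3 exposes retry and multipart knobs that make uploads more tolerant of slow links; all values below are assumptions for the example:
```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# Credentials are resolved from the environment; endpoint is a placeholder.
s3 = boto3.client(
    "s3",
    endpoint_url="https://{YOUR_ACCOUNT_ID}.r2.cloudflarestorage.com",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

transfer = TransferConfig(
    multipart_threshold=5 * 1024 * 1024,  # start multipart at 5 MB
    multipart_chunksize=5 * 1024 * 1024,  # smallest S3-compatible part size
    max_concurrency=1,                    # don't saturate a 320 Kbps uplink
)
s3.upload_file("local.bin", "my-bucket", "remote.bin", Config=transfer)
```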
GrahamC
When an upload has failed, there is a retry icon (circular arrow) in the Failed column. Pressing it shows 'Queued file for upload', but no retry happens and nothing appears in the Waiting or Running columns.
Tom (replying to GrahamC's retry report above)
When retry is clicked, the item gets added to the queue and the upload process starts immediately (unless it was already running), so in most cases it won't appear in the Waiting column. The Running column displays it only after the first transfer update is received, which on slower connections might take longer than expected. That state is undesirable, as it creates a sort of vacuum where the item isn't displayed anywhere. This is already addressed on the desktop clients and will be released later today on Android and iOS; in other words, after an item is picked from Waiting it will appear immediately as Running.

I am not sure this is the issue you experienced, though, as in principle the item should appear in Running anyway and eventually in Done/Failed, which doesn't seem to be the case from what I understand. We will investigate further. In the meantime we've also thought about adding a "retry all" option to alleviate the SocketException issue, which may create too many entries to retry individually. I will let you know about progress on all of that. Thanks!
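A minimal sketch of the queue behaviour described above, with hypothetical names (not the app's real code), where an item becomes Running as soon as it is picked rather than on its first progress event:
```python
from enum import Enum, auto

class State(Enum):
    WAITING = auto()
    RUNNING = auto()
    DONE = auto()
    FAILED = auto()

class UploadItem:
    def __init__(self, name: str):
        self.name = name
        self.state = State.WAITING

    def pick(self):
        # Old behaviour: the item stayed "invisible" until the first
        # progress event arrived. Fixed behaviour: mark RUNNING on pick.
        self.state = State.RUNNING

    def retry(self):
        if self.state is State.FAILED:
            self.state = State.WAITING  # re-queue; pick() then runs it
```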
Tom (following up on the same report)
The "retry all" option is successfully deployed. There are more improvements regarding the upload process coming soon. If despite that you still experience an issue with retry being not effective please let me know, so we can be aware of improvements needed in that area as well. Thanks !
Tom
This is to let you know that there will be an improvement in this area this month with the release of a new streaming cipher: https://s3drive.canny.io/feature-requests/p/implement-chunked-encryption-using-stream-protocol
Protocol: https://github.com/miscreant/meta/wiki/STREAM
This is to speed up encryption/decryption of bigger files, since after some threshold we fall back to …
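For a sense of what the linked STREAM construction does, a simplified sketch using AES-GCM from the cryptography package (the segment size, key handling and framing are assumptions; this is not S3Drive's actual cipher):
```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK = 1024 * 1024  # 1 MiB segments (illustrative)

def encrypt_stream(key: bytes, plaintext: bytes) -> bytes:
    aead = AESGCM(key)
    prefix = os.urandom(7)  # 7-byte nonce prefix, per STREAM with AES-GCM
    chunks = [plaintext[i:i + CHUNK]
              for i in range(0, len(plaintext), CHUNK)] or [b""]
    out = [prefix]
    for i, chunk in enumerate(chunks):
        last = i == len(chunks) - 1
        # 12-byte nonce = prefix || 32-bit counter || 1-byte last-segment flag
        nonce = prefix + i.to_bytes(4, "big") + bytes([last])
        out.append(aead.encrypt(nonce, chunk, None))
    return b"".join(out)
```
The last-segment flag baked into the nonce is what lets a decryptor detect truncation, which is the point of the STREAM construction.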
Tom
Hi @GrahamC, I was wondering if you're still experiencing the "SocketException" issues? We've had a major release this month: https://s3drive.app/changelog which completely changed the way we process, buffer, and encrypt data when sending to S3. This may or may not have fixed the issue, which is why I thought I'd check with you. Thanks! (edited)
Exported 27 message(s)
Timezone: UTC+0